MPI workarounds for aarch64 #999

Open · wants to merge 3 commits into master
Conversation

abussy (Contributor) commented Aug 30, 2024

This PR contains MPI workarounds for aarch64. As a result, DFTK can now be run on ARM architectures with full MPI support. These workarounds are meant to be temporary, until there is an upstream fix (whenever that happens; issue JuliaParallel/MPI.jl#404 was opened 4 years ago already).

The procedure was as follows: the :mpi-tagged tests were run on an ARM machine until a crash occurred. A set of specific MPI reduction methods was then written for the custom type involved. Other custom types might pop up in the future; adding a new method for them should be trivial.
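
For context, a minimal sketch of the kind of call that triggers the problem (an assumption based on JuliaParallel/MPI.jl#404, not code from this PR): reducing arrays whose element type is not an MPI builtin makes MPI.jl register a custom reduction operator, and that code path is what fails on aarch64.

using MPI
using StaticArrays: SVector

MPI.Init()
comm = MPI.COMM_WORLD

# Plain Float64 buffers reduce with the built-in MPI.SUM and work on any architecture.
buf = rand(3)
MPI.Allreduce!(buf, +, comm)

# A custom isbits element type (here an SVector, i.e. DFTK's Vec3) requires a custom
# reduction operator, which is the code path that breaks on aarch64.
data = [SVector(1.0, 2.0, 3.0), SVector(4.0, 5.0, 6.0)]
MPI.Allreduce!(data, +, comm)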

I know this is rather clunky, and it inflates the code size and complexity. However, I think it is crucial to fully support DFTK on ARM architecture, especially now that many modern HPC clusters run on it (e.g. Alps at CSCS).

mfherbst (Member) commented Sep 9, 2024

Thanks. Two remarks:

  • We used to have a workarounds folder. I suggest putting the code there.
  • Did you report these fixes upstream, or is that not easy to do?

abussy (Contributor, Author) commented Sep 19, 2024

I moved the fixes to the workarounds directory for better consistency.

I did not explicitly report these fixes upstream, but since my initial message references the original issue, this PR appears there for reference.


# Vec3{T} must be cast to Vector{T} before MPI reduction
function mpi_sum!(arr::Vector{Vec3{T}}, comm::MPI.Comm) where {T}
    n = length(arr)

Member:

Can this not be solved by a reinterpret(reshape, ...)? Then there is no copy.

Contributor Author:

I think this one is tricky to do without copies, because SVectors are immutable. So even if I use something like new_arr = reshape(reinterpret(T, arr), 3, :), I still could not call mpi_sum!(). And assuming I call mpi_sum() instead, I also end up with a copy.
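
For reference, a straightforward copy-based completion of the snippet above might look like this (a hedged sketch, not necessarily the PR's exact code; it assumes mpi_sum! performs an in-place MPI.Allreduce! sum and that Vec3{T} is DFTK's SVector{3, T} alias):

# Vec3{T} must be cast to Vector{T} before MPI reduction
function mpi_sum!(arr::Vector{Vec3{T}}, comm::MPI.Comm) where {T}
    n = length(arr)
    flat = Vector{T}(undef, 3n)    # plain buffer, reducible with the built-in MPI.SUM
    for i in 1:n
        flat[3i-2:3i] .= arr[i]
    end
    MPI.Allreduce!(flat, +, comm)
    for i in 1:n
        arr[i] = Vec3{T}(flat[3i-2], flat[3i-1], flat[3i])
    end
    arr
end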

# utility function to cast a Dual type to an array containing the value and the partial derivatives
function dual_array(dual::ForwardDiff.Dual{T, U, V}) where {T, U, V}
    dual_array = [ForwardDiff.value(dual)]
    append!(dual_array, collect(ForwardDiff.partials(dual)))

Member:

I vaguely recall there are again ways to do this without a copy. Check the ForwardDiff rules for array operations (multiplication needs to be applied equally to all partials).

end
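
As a side note on the reduction itself, a small illustration (not from the PR) of why a plain entrywise sum over the flattened arrays is equivalent: Dual arithmetic acts component-wise on the value and each partial, so summing the [value; partials...] vectors reproduces the Dual sum.

using ForwardDiff

a = ForwardDiff.Dual{:tag}(1.0, 2.0, 3.0)    # value 1.0, partials (2.0, 3.0)
b = ForwardDiff.Dual{:tag}(4.0, 5.0, 6.0)
a + b                                        # Dual{:tag}(5.0, 7.0, 9.0)
dual_array(a) + dual_array(b)                # [5.0, 7.0, 9.0], the same numbers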

# utility function that casts back an array to a Dual type, based on a template Dual
function new_dual(dual_array, template)

Member:

Again, ForwardDiff may have something here.
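
For illustration, one possible shape of this helper (a hedged sketch, not the PR's code): reconstruct the Dual from the flat array, using the template only to recover the tag, the value type, and the number of partials.

# utility function that casts back an array to a Dual type, based on a template Dual
function new_dual(dual_array, template::ForwardDiff.Dual{T, U, V}) where {T, U, V}
    value    = U(dual_array[1])
    partials = ForwardDiff.Partials{V, U}(ntuple(i -> U(dual_array[i + 1]), V))
    ForwardDiff.Dual{T}(value, partials)
end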

mpi_sum!(array, comm)
offset = 0
for i in 1:length(dual)
    dual[i] = new_dual(array[offset+1:offset+lengths[i]], dual[i])

Member:

Indexing creates a copy by default. Use views (@views).
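
A hedged sketch of the suggested change (the offset update is assumed from context; wrapping the loop in @views turns the slice into a view instead of a copy):

mpi_sum!(array, comm)
offset = 0
@views for i in 1:length(dual)
    dual[i] = new_dual(array[offset+1:offset+lengths[i]], dual[i])
    offset += lengths[i]
end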

abussy (Contributor, Author) commented Sep 20, 2024

The last commit minimizes the number of copies and/or temporary arrays in the MPI workarounds for aarch64. A few things unfortunately cannot be avoided, e.g. the creation of standard-type arrays for MPI communication, and some copies when accessing elements of a Vector{ForwardDiff.Dual}.

mfherbst (Member):

@abussy There seems to be some update in the issue on the MPI.jl side as well. Maybe that helps.
